Market Roundup

January 5, 2007

Virtualization and ILM 2006: Looking Back

Compliance 2007: The King’s SOX Had Holes

Power, Cooling, and the Data Center

Service Management 2007: A Bright New Dawn for Business or The Cold Light of Day for CIOs?

 


Virtualization and ILM 2006: Looking Back

By Joyce Tompsett Becknell

One of the hottest topics in computing in 2006 was virtualization. Like many trends before it, it came with many definitions, many disguises, and a significant FUD factor. Companies jumping on bandwagons or squandering precious marketing time in the weeds of technical detail added plenty of confusion, but some valuable ground was gained as well. This piece is not meant to be a detailed analysis of the year that was, but a look at how we got to today and what we expect for the coming year.

First, there are two main areas of virtualization from a systems viewpoint. One is storage virtualization, which involves storage area networks (SANs), network attached storage (NAS), and various virtualization offerings from the storage companies; increasingly, it also includes software. The other is system virtualization, which covers virtualization of part or all of a system, whether that system is a client or a server. Much of the fuss around information lifecycle management (ILM) has died down, and several companies have dropped or scaled back messaging around the concept, which is ironic given that many of this year’s breakthroughs actually brought us closer to being able to realize ILM visions. Perhaps it is just as well, though, as ILM meant something a little different to everyone who thought about it.

The two big areas of growth around virtualization in 2006 were software and management. One assumes that hardware is part of the picture, but although companies like Intel and AMD continue to make their products more accessible to various virtualization schemes, the real news was what vendors were doing with software and management capabilities.

All the usual suspects made announcements this year, including HP, IBM, HDS, and EMC. EMC did an awful lot of interesting work with Rainfinity and file virtualization as well as with Documentum, not to mention the continued popularity of VMware as a way to create virtual servers and VMware’s other products such as ACE, which is similar to Rainfinity. Microsoft created news by making its virtualization format technology available under its Open Specification Promise (OSP).

In general, the two areas of focus for virtualization are deployment and management. Under the first, getting everything to work together is important and sometimes a challenge. One of the chief reasons for virtualizing is to be able to use software on a platform other than the one for which it was designed. If one cannot bring multiple platforms together, then virtualization is of limited efficacy and loses much of its appeal. Additionally, if overhead slows performance or the products have trouble scaling, uptake of virtualization technologies will be limited. We expect vendors to spend time making more devices, and more versions of devices, work together, and we expect the scalability issue to be addressed. We also expect to hear an awful lot from the power and cooling crowd this year, as more efficient use of resources (another intended benefit of virtualization) is becoming critical to many companies.

Scalability is usually an issue for large and growing installations, and hand in hand with it comes management. Managing multiple devices matters not only from the IT manager’s point of view but also from the business view. Policy-based automation governed by business rules is the goal, and that means having good reporting and audit capabilities and, equally important, good security. The industry in general has treated reporting as a secondary feature, but the importance of compliance and governance is driving these features to the top. Those virtualization providers that still spend most of their time talking about technical features will find themselves rewriting presentations to address these issues, if they have not done so already.
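
To make the policy-based automation idea concrete, the sketch below shows what such a business rule might look like in practice. It is a minimal, hypothetical Python illustration, not any vendor’s actual product interface; the helper functions and the 90-day rule are assumptions made purely for the example.

# A minimal, hypothetical sketch of policy-based automation driven by a business rule.
# The helpers below are stand-ins, not any management product's real API.

from datetime import datetime, timedelta

ARCHIVE_AFTER = timedelta(days=90)   # business rule: archive data untouched for 90 days

def move_volume(volume_id, tier):
    # Stand-in for a call into the storage virtualization layer.
    print(f"moving {volume_id} to {tier} tier")

def log_audit_event(action, volume_id):
    # Reporting and audit hook: compliance requires a record of every automated action.
    print(f"{datetime.now().isoformat()} {action} {volume_id}")

def apply_archive_policy(volumes):
    for vol in volumes:
        if datetime.now() - vol["last_accessed"] > ARCHIVE_AFTER:
            move_volume(vol["id"], tier="archive")
            log_audit_event("archived", vol["id"])

# Example run: one stale volume gets archived, one recently used volume is left alone.
apply_archive_policy([
    {"id": "vol-001", "last_accessed": datetime.now() - timedelta(days=200)},
    {"id": "vol-002", "last_accessed": datetime.now() - timedelta(days=5)},
])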

On the competitive front, we anticipate that Microsoft will decide that since it now understands some level of virtualization, it has solved the universe’s problems again. Just as all new fathers believe they have reinvented fatherhood, so Microsoft will have its first hypervisor this year. It should be interesting to see how the market adopts that technology along with all the other new products coming out of Redmond this year.

We also expect to see the management vendors jumping into the act. BMC and CA are mightily interested in management, as is Cisco, which we have mentioned several times as having an interest in becoming the core technology. There is certainly virtualization in the network as well, and where there is a network, Cisco will not be far behind.

In 2007 virtualization will continue to normalize across more areas. SANs and NAS have been common for years, and VMware is no longer the disruptive technology it once was. Other forms of virtualization are just emerging into the mainstream, however, particularly at the software level. Although we have spoken mostly about systems virtualization, application virtualization also exists: companies such as Veritas and Sun have application virtualization offerings that matured in 2006, and Softricity’s SoftGrid brought application virtualization to the desktop.

We also anticipate a lot of concern around virtualization security, particularly as virtualization grows in the Windows space. Interest in using virtualization to secure platforms will grow; virtualized software delivery, for example, could take off if Microsoft can leverage the SoftGrid technology it gained from its purchase of Softricity in 2006. There will also be questions about how secure virtual platforms themselves are, and we expect to see the security vendors leaping in.

Finally, there are the financial issues. Software licensing based on processors gets substantially harder in a virtualized environment, particularly one in which virtual machines are created and destroyed on a regular basis. We anticipate license negotiations as customers and vendors work out new deals for changing infrastructures. Programs from companies like CA that offer leases on software may not be appreciated by the financial analysts, but from an IT manager’s perspective they make good sense and provide both CA and the customer with maximum flexibility.
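
A simplified, invented example illustrates the counting problem. The numbers below are assumptions chosen purely for illustration; the point is that a host with a handful of physical processors can run a shifting population of virtual machines whose peak demand bears little relation to the socket count a traditional license would bill against.

# Hypothetical illustration of why per-processor licensing gets awkward when
# virtual machines come and go. All figures are invented for the example.

# (start_hour, stop_hour, virtual_cpus) for VMs created and destroyed during one day
vm_events = [(0, 24, 2), (8, 18, 4), (9, 11, 4), (20, 23, 8)]

peak_vcpus = max(
    sum(vcpus for start, stop, vcpus in vm_events if start <= hour < stop)
    for hour in range(24)
)

physical_sockets = 4   # the host itself has only four processor sockets

print(f"peak concurrent virtual CPUs: {peak_vcpus}")    # 10 in this example
print(f"physical sockets on the host: {physical_sockets}")
# Should the license bill reflect 4 sockets, 10 peak vCPUs, or VM-hours?
# That is exactly what customers and vendors will have to negotiate.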

Compliance 2007: The King’s SOX Had Holes

By Larry Dietz

For most of 2006 the term compliance was synonymous with the dreaded U.S. Sarbanes-Oxley (SOX) law. The overwhelming majority of large organizations, especially multinationals, found themselves spending oodles of money on myriad projects all earmarked as necessary for “SOX compliance.” Major trading countries and regions took notice of how U.S. organizations were scurrying about trying to ensure that their top executives would not be clad in orange jumpsuits and headed to jail. Some countries, such as Japan, decided to go on the offensive and put the world on notice that they too would be passing legislation designed to bolster investor confidence and mend the sins of past malfeasance on the part of several executives and organizations.

Organizations have been facing a maze of regulations for quite some time; furthermore, it was not uncommon for the regulations to be technology-neutral in their guidance and perhaps even conflicting in their requirements. Laws and regulations could be based on jurisdiction: federal (country level), state or provincial, or even municipal. Examples include the California disclosure law popularly known as SB 1386 and the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA). Organizations also found that they could be subject to regulations based on their size (revenue, market capitalization, number of employees) or their industry. For example, in health care there is HIPAA, more properly known as the Health Insurance Portability and Accountability Act of 1996; in financial services there is GLBA, the Gramm-Leach-Bliley Act of 1999; and for power and energy there are regulations promulgated by the North American Electric Reliability Council (NERC) that affect Canada, Mexico, and the United States.

An unintended result of this web of regulations is that top management is not necessarily clear on what the organization must do in terms of personnel issues, policies, and procedures. This can leave IT as the tail on the business dog. Top management must clearly describe business goals and objectives so that IT can implement them. In the case of compliance, IT cannot sequentially address each and every rule, regulation, and law. Rather, the organization must employ IT as a tool for governance. IT, and information security and privacy technology in particular, can be used to enforce standards within the operation of the organization. Taken together, these standards will ensure that the IT infrastructure the organization and its top management rely on to provide accurate and current information actually does so. IT can also be judiciously employed to ensure that the organization can function in spite of unforeseen interruptions, whether they are acts of nature, intentional acts by adversaries, or accidents.

We are cautiously optimistic about the compliance outlook for 2007. We feel fairly confident in saying that U.S. lawmakers have been made aware of the negative effects of SOX and have hopefully taken notice of heightened IPO activity in financial markets outside the U.S., such as Hong Kong. This is likely to translate into a loosening of the perceived SOX stranglehold. The lack of a successful SOX prosecution may also embolden executives to take a commonsense approach to running the organization: one with stated goals and objectives for governance, and one that translates business objectives into IT standards, policies, and procedures that ensure the integrity of the IT infrastructure, which was the core intent of SOX in the first place.

Power, Cooling, and the Data Center

By Clay Ryder

In the future, we may look back on 2006 as the year that power consumption, cooling, and energy efficiency in the data center ceased being a back-burner issue for IT and facilities managers and became one of the foremost, if not the leading, issues for many. While those “in the know” have always been aware of HVAC and power distribution limitations, until recently these were not a noticeable issue. During the past several months, we have seen vendors focus on energy efficiency through various initiatives including HP’s Smart Cooling, EMC’s Energy Efficiency Tool, Sun’s CoolThreads, the latest Energy Star specification, and VMware’s energy utility rebates. With competitive attention now being brought to bear, we expect this topic to remain at an elevated level during 2007 as vendors line up their competitive differentiation and their definitions of what exactly energy efficiency is all about.

Although much of the cost cutting and resource gutting by CIOs and CFOs during the first part of the twenty-first century focused on infrastructure consolidation and headcount reductions, it didn’t take too long for the impact of $75/bbl oil and 22¢/kWh electricity to reach into the data center. At the same time, rather ironically, all the focus on server and storage consolidation combined with ever denser form factors such as blades has changed the heat generation and dissipation characteristics of the data center. Thus, the limitation of physical reality has once again impeded progress in our collective journey to a virtual IT existence. Yet there are many similarities and lessons to be drawn from the “consolidate, simplify, and virtualize” mantra of the past few years. Just as inefficiencies in server and storage utilization have led to consolidations featuring closely monitored virtualization schemes, we are now witnessing the same opportunity with cooling and power consumption.

Over-provisioning of cooling and power is inherently just as inefficient as over-provisioning anything else. If machine rooms are continuously cooled to meet peak loads, a lot of kilowatt-hours go to waste. Likewise, if the power actually drawn by equipment is well below what the wiring supports because the wiring was designed for a worst-case scenario unlikely to occur, unnecessary breaker panels and conduit will have been installed. From a financial and operational perspective, targeting cooling where it is needed, and only when it is needed, just makes sense; anything expended beyond that is simply waste that hits the bottom line of the business. Similarly, electrical circuits that are underutilized represent an underperforming investment.
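
A back-of-the-envelope calculation shows the scale of the waste. The figures below are assumptions, not measurements from any particular data center; the electricity rate is simply the 22¢/kWh figure cited above.

# Rough, illustrative arithmetic on cooling over-provisioning.
# All figures are assumptions chosen for the example, not measured data.

peak_cooling_kw = 100        # cooling plant sized to the worst-case heat load
average_needed_kw = 60       # what dynamically targeted cooling would actually draw
hours_per_year = 24 * 365
price_per_kwh = 0.22         # the 22 cents/kWh rate cited above

wasted_kwh = (peak_cooling_kw - average_needed_kw) * hours_per_year
print(f"wasted energy: {wasted_kwh:,.0f} kWh per year")               # 350,400 kWh
print(f"wasted spend:  ${wasted_kwh * price_per_kwh:,.0f} per year")  # roughly $77,000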

Although initiatives in the marketplace vary in their impact, we believe that the players who can provide dynamic, realtime monitoring and control of the power and cooling envelopes will be the long-term winners in this space. At present HP probably has the most comprehensive offering; however, other vendors certainly have many requisite pieces of the puzzle, and one cannot overlook the ability of IBM’s Global Services to pull together just about any solution given enough money. At a minimum, a combination of systems management, facilities management, myriad sensors, and realtime data acquisition and control software will be required to achieve enhanced data center power and cooling efficiency. In addition, the knowledge, planning, and wherewithal needed to pull this together should not be underestimated. But despite the higher barrier to entry to play effectively in this space, we believe the opportunity is too great for most systems or management vendors to overlook.
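
As a sketch of what such realtime data acquisition and control might look like at its simplest, the following Python fragment polls simulated temperature sensors and nudges a cooling setpoint only when a rack actually runs hot. The sensor and facilities functions are hypothetical stand-ins, simulated so that the example runs on its own; a real deployment would integrate with actual systems and facilities management interfaces.

# Minimal sketch of a dynamic monitoring-and-control loop for data center cooling.
# read_rack_temperatures() and set_crac_setpoint() are hypothetical stand-ins,
# simulated here so the example runs on its own.

import random
import time

TARGET_MAX_C = 27.0   # upper bound for acceptable rack inlet temperature

def read_rack_temperatures():
    # Stand-in for polling the data center's temperature sensors.
    return {f"rack-{i}": random.uniform(22.0, 30.0) for i in range(1, 5)}

def set_crac_setpoint(change_c):
    # Stand-in for a facilities-management call to the computer room air conditioners.
    print(f"adjusting CRAC setpoint by {change_c:+.1f} C")

def control_cycle():
    temps = read_rack_temperatures()
    hottest = max(temps.values())
    if hottest > TARGET_MAX_C:
        set_crac_setpoint(-1.0)   # cool harder only when a rack actually runs hot
    else:
        set_crac_setpoint(+0.5)   # otherwise relax cooling and stop wasting kilowatt-hours

for _ in range(3):   # a few cycles instead of an endless loop
    control_cycle()
    time.sleep(1)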

For systems management vendors such as HP, IBM, BMC, CA and others, the myriad sensors necessary to monitor environmental conditions represent an opportunity to extend management solutions to reach beyond the traditional bounds of IT. The dividing line between IT and facilities is clearly blurring in the data center. This disruption in thinking highlights the latent opportunity.

We expect to see more initiatives in which vendors such as Sun, VMware, EMC, IBM, and others work with utilities to implement creative programs that help organizations reduce data center power consumption. Besides reducing the power bill, the reduced demand for data center power forestalls the need for additional generation capacity on the power grid, a win-win scenario for the utility and ratepayers alike. In addition, environmental factors such as air quality and ambient noise levels will likely emerge as drivers as well, as organizations rationalize and change how they approach office and work space internally.

Organizations will reap these benefits incrementally as they refresh their technology over its lifecycle, and in some cases the savings might encourage earlier refresh of equipment that is still functional but less efficient. Of course, a significant upgrade of the data center as a whole would bring more ROI sooner. If power utilities were to embrace power-savings programs for computer technologies, as many do for older household appliances, lighting, heating, and cooling equipment, the potential to enhance the energy efficiency of the data center would grow significantly. Hopefully, 2007 will bring exactly this kind of marketplace behavior as chipmakers, drive manufacturers, systems vendors, storage specialists, systems management companies, and utilities all work toward improving the efficiency of the physical operation of the data center.

Service Management 2007: A Bright New Dawn for Business or The Cold Light of Day for CIOs?

By Tony Lock

Over the last twelve months the area of “IT Service Management” has received an awful lot of attention. It is a sector in which both the large, well-established systems management vendors and new startups have expended considerable time, money, and intellectual resources. To understand why so many vendors, large and small, are focusing their efforts in this space, it is necessary to look at where IT infrastructure management has come from and to understand where inexorable business concerns are now forcing IT managers to concentrate.

The commercial pressures facing every organization have fundamentally altered the way IT is regarded. In the past IT was almost considered a “necessary evil” that no one outside of IT understood, or wanted to understand. Today that picture no longer has any credibility: for the majority of enterprises IT is now established as a major, and often the primary, factor in business success. However, as commercial pressures have exerted themselves on organizations, the pressure to control costs and, more recently, to ensure that all resources, especially IT resources, are well managed, cost-efficient, and delivering maximum business benefit has become the normal modus operandi.

For IT this translates into pressure to demonstrate that IT systems are procured at minimum cost and operate effectively in line with business service level requirements. In essence, systems management must be used to keep business services running, and IT must provide detailed information on exactly how IT budget spending maps onto business needs. This is where Business Service Management tools and processes come into play.

2006 saw essential developments in the capabilities of Service Management tools, especially in the increasingly important area of “reporting.” These tools are beginning to supply much greater transparency and, ultimately, a better understanding of where IT spends its money and how that spending stacks up against business needs. This transparency should not be taken as a threat either to CIOs or to the professionals who today work long hours to keep systems operational, typically without adequate recognition. Quite the opposite: transparency gives IT an opportunity to highlight the value it delivers to the top line instead of defending every service interruption, however minor.

The Service Management approach to running IT is here to stay and will eventually deliver value both to the business itself and to those charged with delivering the IT services on which the organization depends but about which it is often in the dark. As IT systems become inherently more flexible with the rapid advance of “virtualization capabilities,” IT will be better able to cope with rapidly changing business demands for service. Service Management offers a credible means of ensuring that resources are utilized effectively.

Service Management also gives IT a chance to take its rightful place at the forefront of business rather than being invisible in the data center. We confidently expect that 2007 will see the technologies that support the Service Management approach to IT service delivery add vital capabilities. Further, we anticipate that organizations will begin to fully understand the value that using such tools well, coupled with good IT/Business Service Management practices, can deliver. 2007 is the year Service Management will finally move off the theoretical drawing board and begin to take hold at the heart of IT infrastructure management and IT risk administration. Service Management will not happen overnight, but it will become firmly established in core areas.


The Sageza Group, Inc.

32108 Alvarado Blvd #354

Union City, CA 94587

510·675·0700 fax 650·649·2302

London +44 (0) 20·7900·2819

Milan +39 02·9544·1646

 

sageza.com

 

Copyright © 2007 The Sageza Group, Inc. May not be duplicated or retransmitted without written permission.